Appendix A: Common Recursive Patterns Cheat Sheet
First + Rest Pattern
The First + Rest pattern is the foundational approach for processing sequences recursively. It decomposes a list into its first element and the remaining elements, processes the first element, and recursively handles the rest.
Core Structure
The pattern follows this template:
def process_list(items):
    # Base case: empty list
    if not items:
        return base_value
    # Decompose: first element + rest of list
    first = items[0]
    rest = items[1:]
    # Process first, recurse on rest, combine results
    return combine(process(first), process_list(rest))
When to Use
- Linear processing: Each element needs individual attention
- Aggregation: Summing, counting, finding max/min
- Transformation: Building new lists element-by-element
- Searching: Finding elements or patterns in sequences
Reference Implementation: Sum of List
Let's establish a concrete example that demonstrates the pattern's evolution:
def sum_list(numbers):
    """Sum all numbers in a list using First + Rest pattern."""
    # Base case: empty list sums to 0
    if not numbers:
        return 0
    # Decompose
    first = numbers[0]
    rest = numbers[1:]
    # Combine: first + sum of rest
    return first + sum_list(rest)

# Test
print(sum_list([1, 2, 3, 4, 5]))  # Output: 15
Execution trace:
sum_list([1, 2, 3, 4, 5])
→ first=1, rest=[2, 3, 4, 5]
→ 1 + sum_list([2, 3, 4, 5])
    → first=2, rest=[3, 4, 5]
    → 2 + sum_list([3, 4, 5])
        → first=3, rest=[4, 5]
        → 3 + sum_list([4, 5])
            → first=4, rest=[5]
            → 4 + sum_list([5])
                → first=5, rest=[]
                → 5 + sum_list([])
                    → base case: return 0
                → 5 + 0 = 5
            → 4 + 5 = 9
        → 3 + 9 = 12
    → 2 + 12 = 14
→ 1 + 14 = 15
Variation 1: Building New Lists
The pattern adapts naturally to list construction:
def double_all(numbers):
    """Double each number in a list."""
    if not numbers:
        return []
    first = numbers[0]
    rest = numbers[1:]
    # Build new list: [doubled_first] + doubled_rest
    return [first * 2] + double_all(rest)

print(double_all([1, 2, 3]))  # Output: [2, 4, 6]
Variation 2: Filtering
Conditional inclusion based on first element:
def keep_positive(numbers):
    """Keep only positive numbers."""
    if not numbers:
        return []
    first = numbers[0]
    rest = numbers[1:]
    # Include first only if it meets condition
    if first > 0:
        return [first] + keep_positive(rest)
    else:
        return keep_positive(rest)

print(keep_positive([1, -2, 3, -4, 5]))  # Output: [1, 3, 5]
Variation 3: Finding Maximum
Comparison-based aggregation:
def find_max(numbers):
    """Find maximum value in list."""
    # Base case: single element is the max
    if len(numbers) == 1:
        return numbers[0]
    first = numbers[0]
    rest = numbers[1:]
    # Compare first with max of rest
    max_of_rest = find_max(rest)
    return first if first > max_of_rest else max_of_rest

print(find_max([3, 1, 4, 1, 5, 9, 2]))  # Output: 9
Complexity Characteristics
Time Complexity: O(n) recursive calls
- Each element visited exactly once
- Single recursive call per element
- Note: each call also slices the list (O(n) per call), so total work with slicing is O(n²); the index-based variant below restores O(n)

Space Complexity: O(n) call stack, O(n²) with slicing
- Call stack depth: n
- List slicing creates a new list on every call: O(n) per call
- Total auxiliary space: O(n²) due to slicing overhead
Optimization: Index-Based Approach
Avoid list slicing overhead by using indices:
def sum_list_optimized(numbers, index=0):
    """Sum using index instead of slicing."""
    # Base case: reached end
    if index >= len(numbers):
        return 0
    # Process current index, recurse on next
    return numbers[index] + sum_list_optimized(numbers, index + 1)

print(sum_list_optimized([1, 2, 3, 4, 5]))  # Output: 15
Improved space complexity: O(n) call stack only, no slicing overhead.
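The same index trick works for list-building variations. As a minimal sketch (the function name `double_all_indexed` is new here, not from the text above), this rewrites the earlier `double_all` so the input list is never sliced; note the output list is still built by concatenation, so total time remains quadratic:

```python
def double_all_indexed(numbers, index=0):
    """Double each number without slicing the input list."""
    # Base case: past the last element
    if index >= len(numbers):
        return []
    # Prepend doubled current element to the doubled rest
    return [numbers[index] * 2] + double_all_indexed(numbers, index + 1)

print(double_all_indexed([1, 2, 3]))  # Output: [2, 4, 6]
```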
Decision Framework
| Use First + Rest When | Avoid When |
|---|---|
| Processing each element individually | Need random access to elements |
| Building results incrementally | Working with very large lists (stack depth) |
| Logic naturally splits into "handle one, recurse on rest" | Simple iteration is clearer |
| Teaching/learning recursion | Performance is critical (use iteration) |
Common Pitfalls
Pitfall 1: Forgetting empty list base case
def broken_sum(numbers):
    first = numbers[0]  # ✗ IndexError on empty list!
    rest = numbers[1:]
    return first + broken_sum(rest)
Pitfall 2: Wrong base case return value
def broken_product(numbers):
    if not numbers:
        return 0  # ✗ Wrong! Should return 1 (multiplicative identity)
    # ...
Pitfall 3: Inefficient list slicing in loops
# Creates O(n²) intermediate lists
def process(items):
    if not items:
        return []
    return [items[0] * 2] + process(items[1:])  # ✗ Expensive slicing
Quick Reference Template
def first_rest_template(items):
    """Template for First + Rest pattern."""
    # 1. Base case: empty sequence
    if not items:
        return base_value  # Choose appropriate base
    # 2. Decompose
    first = items[0]
    rest = items[1:]
    # 3. Process first
    processed_first = process(first)
    # 4. Recurse on rest
    processed_rest = first_rest_template(rest)
    # 5. Combine results
    return combine(processed_first, processed_rest)
Divide and Conquer Pattern
Divide and Conquer splits a problem into roughly equal halves, solves each half recursively, and combines the results. This pattern achieves logarithmic recursion depth, making it efficient for large inputs.
Core Structure
The pattern follows this three-phase template:
def divide_and_conquer(data):
    # Base case: problem small enough to solve directly
    if len(data) <= threshold:
        return solve_directly(data)
    # Divide: split into roughly equal parts
    mid = len(data) // 2
    left_half = data[:mid]
    right_half = data[mid:]
    # Conquer: solve subproblems recursively
    left_result = divide_and_conquer(left_half)
    right_result = divide_and_conquer(right_half)
    # Combine: merge results
    return combine(left_result, right_result)
When to Use
- Halving makes sense: Problem naturally splits into independent subproblems
- Logarithmic depth desired: O(log n) recursion depth vs O(n) for First + Rest
- Parallel potential: Subproblems can be solved independently
- Classic algorithms: Binary search, merge sort, quick sort
Reference Implementation: Binary Search
Let's establish the canonical divide-and-conquer example:
def binary_search(sorted_list, target):
    """Find target in sorted list using divide and conquer."""
    # Base case: empty list
    if not sorted_list:
        return -1
    # Divide: find middle
    mid = len(sorted_list) // 2
    mid_value = sorted_list[mid]
    # Base case: found target
    if mid_value == target:
        return mid
    # Conquer: search appropriate half
    if target < mid_value:
        # Search left half
        return binary_search(sorted_list[:mid], target)
    else:
        # Search right half
        result = binary_search(sorted_list[mid + 1:], target)
        # Adjust index for right half offset
        return -1 if result == -1 else mid + 1 + result

# Test
numbers = [1, 3, 5, 7, 9, 11, 13, 15]
print(binary_search(numbers, 7))   # Output: 3
print(binary_search(numbers, 10))  # Output: -1
Execution trace for binary_search([1, 3, 5, 7, 9, 11, 13, 15], 7):
binary_search([1, 3, 5, 7, 9, 11, 13, 15], 7)
→ mid=4, mid_value=9
→ 7 < 9, search left half
    binary_search([1, 3, 5, 7], 7)
    → mid=2, mid_value=5
    → 7 > 5, search right half
        binary_search([7], 7)
        → mid=0, mid_value=7
        → 7 == 7, found!
        → return 0
    → adjust index: 2 + 1 + 0 = 3
    → return 3
→ return 3

Result: 3
Recursion tree:
[1,3,5,7,9,11,13,15]  target=7
        | 7 < 9: search left half
    [1,3,5,7]  target=7
        | 7 > 5: search right half
        [7]  target=7
            ✓ FOUND at index 0
Variation 1: Merge Sort
The classic sorting algorithm using divide and conquer:
def merge_sort(items):
    """Sort list using divide and conquer."""
    # Base case: single element or empty
    if len(items) <= 1:
        return items
    # Divide: split in half
    mid = len(items) // 2
    left = items[:mid]
    right = items[mid:]
    # Conquer: sort each half
    sorted_left = merge_sort(left)
    sorted_right = merge_sort(right)
    # Combine: merge sorted halves
    return merge(sorted_left, sorted_right)

def merge(left, right):
    """Merge two sorted lists."""
    result = []
    i = j = 0
    # Compare elements from both lists
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Append remaining elements
    result.extend(left[i:])
    result.extend(right[j:])
    return result

# Test
print(merge_sort([38, 27, 43, 3, 9, 82, 10]))
# Output: [3, 9, 10, 27, 38, 43, 82]
Recursion tree for merge_sort([38, 27, 43, 3]):
          [38, 27, 43, 3]
          /            \
    [38, 27]          [43, 3]
    /      \          /     \
 [38]     [27]     [43]     [3]
   |        |        |        |
 [38]     [27]     [43]     [3]
    \      /          \     /
    [27, 38]          [3, 43]
          \            /
        [3, 27, 38, 43]
Variation 2: Maximum Subarray Sum
Finding the maximum sum of contiguous subarray:
def max_subarray_sum(arr):
    """Find maximum sum of contiguous subarray."""
    if len(arr) == 1:
        return arr[0]
    # Divide
    mid = len(arr) // 2
    left = arr[:mid]
    right = arr[mid:]
    # Conquer: max sum in left or right half
    left_max = max_subarray_sum(left)
    right_max = max_subarray_sum(right)
    # Combine: max sum crossing the middle
    cross_max = max_crossing_sum(arr, mid)
    return max(left_max, right_max, cross_max)

def max_crossing_sum(arr, mid):
    """Find max sum of subarray crossing the midpoint."""
    # Max sum ending at mid (going left)
    left_sum = float('-inf')
    current_sum = 0
    for i in range(mid - 1, -1, -1):
        current_sum += arr[i]
        left_sum = max(left_sum, current_sum)
    # Max sum starting at mid (going right)
    right_sum = float('-inf')
    current_sum = 0
    for i in range(mid, len(arr)):
        current_sum += arr[i]
        right_sum = max(right_sum, current_sum)
    return left_sum + right_sum

# Test
print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))
# Output: 6 (subarray [4, -1, 2, 1])
Complexity Characteristics
Time Complexity: typically O(n log n)
- Recursion depth: O(log n) due to halving
- Work per level: O(n) for combining
- Total: O(n) × O(log n) = O(n log n)

Space Complexity: O(log n) to O(n)
- Call stack depth: O(log n)
- Additional space depends on the combine step
- Merge sort: O(n) for temporary arrays
- Binary search: O(log n) stack only

Recurrence Relation: T(n) = 2T(n/2) + O(n)
- Solves to O(n log n) by the Master Theorem
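The depth difference between the two patterns is easy to confirm empirically. As a small sketch (the functions `depth_first_rest` and `depth_divide_conquer` are illustrative helpers, not from the text above), this measures recursion depth for a 512-element list:

```python
def depth_first_rest(items, depth=1):
    """Recursion depth for First + Rest: one level per element."""
    if len(items) <= 1:
        return depth
    return depth_first_rest(items[1:], depth + 1)

def depth_divide_conquer(items, depth=1):
    """Recursion depth for halving: one level per doubling of size."""
    if len(items) <= 1:
        return depth
    mid = len(items) // 2
    return max(depth_divide_conquer(items[:mid], depth + 1),
               depth_divide_conquer(items[mid:], depth + 1))

data = list(range(512))
print(depth_first_rest(data))      # Output: 512
print(depth_divide_conquer(data))  # Output: 10  (log2(512) + 1)
```

With Python's default recursion limit of about 1000, the First + Rest version fails on lists a few times larger, while the halving version handles astronomically large inputs.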
Optimization: Index-Based Divide and Conquer
Avoid array slicing overhead:
def binary_search_optimized(arr, target, left=0, right=None):
    """Binary search using indices instead of slicing."""
    if right is None:
        right = len(arr) - 1
    # Base case: search space exhausted
    if left > right:
        return -1
    # Divide: find middle index
    mid = (left + right) // 2
    # Base case: found target
    if arr[mid] == target:
        return mid
    # Conquer: search appropriate half
    if target < arr[mid]:
        return binary_search_optimized(arr, target, left, mid - 1)
    else:
        return binary_search_optimized(arr, target, mid + 1, right)

# Test
numbers = [1, 3, 5, 7, 9, 11, 13, 15]
print(binary_search_optimized(numbers, 7))  # Output: 3
Improved space complexity: O(log n) call stack, no array copying.
Decision Framework
| Use Divide and Conquer When | Avoid When |
|---|---|
| Problem naturally splits into independent halves | Subproblems overlap significantly (use DP) |
| Need O(log n) recursion depth | Problem requires sequential processing |
| Combining results is efficient | Split/combine overhead dominates |
| Data is sorted or can be partitioned | Small input sizes (overhead not worth it) |
Common Pitfalls
Pitfall 1: Incorrect midpoint calculation
# Wrong: `/` is float division in Python, and (left + right)
# can overflow in fixed-width-integer languages
mid = (left + right) / 2

# Correct: avoids overflow
mid = left + (right - left) // 2

# Or in Python (arbitrary-precision ints, no overflow issue):
mid = (left + right) // 2
Pitfall 2: Off-by-one errors in range
# Wrong: infinite recursion if target not found
def broken_search(arr, target, left, right):
    mid = (left + right) // 2
    if arr[mid] == target:
        return mid
    if target < arr[mid]:
        return broken_search(arr, target, left, mid)  # ✗ Should be mid - 1
    else:
        return broken_search(arr, target, mid, right)  # ✗ Should be mid + 1
Pitfall 3: Forgetting to adjust indices after recursion
# Wrong: returns index relative to subarray, not original array
def broken_binary_search(arr, target):
    if not arr:
        return -1
    mid = len(arr) // 2
    if arr[mid] == target:
        return mid
    if target < arr[mid]:
        return broken_binary_search(arr[:mid], target)
    else:
        return broken_binary_search(arr[mid + 1:], target)  # ✗ Missing index adjustment
Quick Reference Template
def divide_conquer_template(data, left=0, right=None):
    """Template for Divide and Conquer pattern."""
    if right is None:
        right = len(data) - 1
    # 1. Base case: problem small enough
    if left >= right:
        return solve_base_case(data, left, right)
    # 2. Divide: find split point
    mid = (left + right) // 2
    # 3. Conquer: solve subproblems
    left_result = divide_conquer_template(data, left, mid)
    right_result = divide_conquer_template(data, mid + 1, right)
    # 4. Combine: merge results
    return combine(left_result, right_result)
Accumulator Pattern
The Accumulator Pattern carries intermediate results through recursive calls via an additional parameter. This pattern enables tail recursion, avoids building up operations on the call stack, and often leads to more efficient implementations.
Core Structure
The pattern adds an accumulator parameter that builds the result incrementally:
def accumulator_recursion(items, accumulator=initial_value):
    # Base case: no more items to process
    if not items:
        return accumulator
    # Process first item, update accumulator
    first = items[0]
    rest = items[1:]
    updated_accumulator = update(accumulator, first)
    # Recurse with updated accumulator
    return accumulator_recursion(rest, updated_accumulator)
When to Use
- Tail recursion: Last operation is the recursive call (no pending operations)
- Building results incrementally: Accumulating sum, product, list, etc.
- Avoiding stack buildup: Result computed during descent, not ascent
- State threading: Passing state through recursive calls
Reference Implementation: Sum with Accumulator
Let's contrast the standard approach with the accumulator pattern:
# Standard First + Rest (builds up operations on stack)
def sum_standard(numbers):
    """Sum using standard recursion."""
    if not numbers:
        return 0
    return numbers[0] + sum_standard(numbers[1:])

# Accumulator Pattern (tail recursive)
def sum_accumulator(numbers, acc=0):
    """Sum using accumulator pattern."""
    if not numbers:
        return acc
    return sum_accumulator(numbers[1:], acc + numbers[0])

# Test both
print(sum_standard([1, 2, 3, 4, 5]))     # Output: 15
print(sum_accumulator([1, 2, 3, 4, 5]))  # Output: 15
Execution trace comparison:
Standard recursion (operations build up):

sum_standard([1, 2, 3])
→ return 1 + sum_standard([2, 3])
    → return 2 + sum_standard([3])
        → return 3 + sum_standard([])
            → return 0
        → 3 + 0 = 3
    → 2 + 3 = 5
→ 1 + 5 = 6

Accumulator pattern (result computed during descent):

sum_accumulator([1, 2, 3], acc=0)
→ sum_accumulator([2, 3], acc=1)
→ sum_accumulator([3], acc=3)
→ sum_accumulator([], acc=6)
→ return 6
Notice: With accumulator, the result is ready when we hit the base case. No operations pending on the stack.
Variation 1: Reverse List
Building a new list in reverse order:
def reverse_list(items, acc=None):
    """Reverse a list using accumulator."""
    if acc is None:
        acc = []
    # Base case: no more items
    if not items:
        return acc
    # Add first item to front of accumulator
    first = items[0]
    rest = items[1:]
    return reverse_list(rest, [first] + acc)

print(reverse_list([1, 2, 3, 4, 5]))  # Output: [5, 4, 3, 2, 1]
Execution trace:
reverse_list([1, 2, 3], acc=[])
→ reverse_list([2, 3], acc=[1])
→ reverse_list([3], acc=[2, 1])
→ reverse_list([], acc=[3, 2, 1])
→ return [3, 2, 1]
Variation 2: Factorial with Accumulator
Computing factorial tail-recursively:
def factorial_accumulator(n, acc=1):
    """Factorial using accumulator pattern."""
    # Base case
    if n <= 1:
        return acc
    # Multiply current n into accumulator
    return factorial_accumulator(n - 1, acc * n)

print(factorial_accumulator(5))  # Output: 120
Execution trace for factorial_accumulator(5):
factorial_accumulator(5, acc=1)
→ factorial_accumulator(4, acc=5)
→ factorial_accumulator(3, acc=20)
→ factorial_accumulator(2, acc=60)
→ factorial_accumulator(1, acc=120)
→ return 120
Variation 3: Range Generation
Building a list of numbers:
def range_recursive(start, end, acc=None):
    """Generate range [start, end) using accumulator."""
    if acc is None:
        acc = []
    # Base case: reached end
    if start >= end:
        return acc
    # Add current value, continue
    return range_recursive(start + 1, end, acc + [start])

print(range_recursive(1, 6))  # Output: [1, 2, 3, 4, 5]
Variation 4: String Building
Accumulating characters into a string:
def join_with_separator(items, separator=", ", acc=""):
    """Join items with separator using accumulator."""
    # Base case: no more items
    if not items:
        return acc
    first = str(items[0])
    rest = items[1:]
    # Add the separator only between items, never before the first
    updated_acc = first if not acc else acc + separator + first
    return join_with_separator(rest, separator, updated_acc)

print(join_with_separator([1, 2, 3, 4]))  # Output: "1, 2, 3, 4"
Complexity Characteristics
Time Complexity: same as the non-accumulator version
- Still O(n) for linear processing
- No performance advantage in Python (no TCO)

Space Complexity: theoretically O(1) with tail call optimization
- In languages with TCO: O(1) stack space
- In Python: still O(n) stack space (no TCO)
- Advantage: clearer intent, easier to reason about

Key Insight: Python does not optimize tail calls, so the accumulator pattern doesn't reduce stack depth. However, it still provides benefits:
- Clearer logic (result built incrementally)
- Easier to convert to iteration
- Better expresses intent
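One workaround worth knowing is a trampoline: the tail-recursive step returns a zero-argument function (a thunk) instead of calling itself, and a driver loop runs thunks until a plain result appears. This is a minimal sketch under that idea (the names `sum_step` and `trampoline` are illustrative, not standard library functions):

```python
def sum_step(numbers, index=0, acc=0):
    """One step of a tail-recursive sum: returns either the final
    result or a thunk representing the next call."""
    if index >= len(numbers):
        return acc
    return lambda: sum_step(numbers, index + 1, acc + numbers[index])

def trampoline(step):
    """Repeatedly invoke thunks until a non-callable result appears."""
    while callable(step):
        step = step()
    return step

# Handles inputs far deeper than Python's recursion limit
print(trampoline(sum_step(list(range(100000)))))  # Output: 4999950000
```

The stack never grows beyond one frame, because each "recursive call" is deferred to the loop instead of nested.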
Converting Accumulator to Iteration
Accumulator pattern naturally converts to a loop:
# Recursive accumulator
def sum_recursive(numbers, acc=0):
    if not numbers:
        return acc
    return sum_recursive(numbers[1:], acc + numbers[0])

# Equivalent iteration
def sum_iterative(numbers):
    acc = 0
    for num in numbers:
        acc += num
    return acc

# Both produce same result
print(sum_recursive([1, 2, 3, 4, 5]))  # Output: 15
print(sum_iterative([1, 2, 3, 4, 5]))  # Output: 15
Decision Framework
| Use Accumulator Pattern When | Avoid When |
|---|---|
| Building result incrementally | Need to process result after recursion |
| Want tail-recursive form | Multiple recursive calls needed |
| Planning to convert to iteration | Divide-and-conquer structure |
| Threading state through calls | Accumulator logic is complex |
Common Pitfalls
Pitfall 1: Forgetting default accumulator value
# Wrong: accumulator required on first call
def broken_sum(numbers, acc):  # ✗ No default value
    if not numbers:
        return acc
    return broken_sum(numbers[1:], acc + numbers[0])
# Must call with: broken_sum([1, 2, 3], 0)

# Better: provide default
def fixed_sum(numbers, acc=0):  # ✓ Default value
    # ...
Pitfall 2: Wrong accumulator initial value
# Wrong: product should start at 1, not 0
def broken_product(numbers, acc=0):  # ✗ Wrong initial value
    if not numbers:
        return acc
    return broken_product(numbers[1:], acc * numbers[0])

print(broken_product([2, 3, 4]))  # Output: 0 (wrong!)

# Correct: multiplicative identity is 1
def fixed_product(numbers, acc=1):
    if not numbers:
        return acc
    return fixed_product(numbers[1:], acc * numbers[0])

print(fixed_product([2, 3, 4]))  # Output: 24 (correct!)
Pitfall 3: Mutating accumulator instead of creating new value
# Dangerous: mutating shared list
def broken_collect(items, acc=[]):  # ✗ Mutable default argument!
    if not items:
        return acc
    acc.append(items[0])  # ✗ Mutates shared list
    return broken_collect(items[1:], acc)

# First call works
print(broken_collect([1, 2, 3]))  # Output: [1, 2, 3]
# Second call has leftover data!
print(broken_collect([4, 5]))  # Output: [1, 2, 3, 4, 5] (wrong!)

# Correct: use None and create new list
def fixed_collect(items, acc=None):
    if acc is None:
        acc = []
    if not items:
        return acc
    return fixed_collect(items[1:], acc + [items[0]])  # ✓ Creates new list
Quick Reference Template
def accumulator_template(items, acc=None):
    """Template for Accumulator pattern."""
    # Handle mutable default argument: build the initial value here
    if acc is None:
        acc = create_initial_accumulator()
    # 1. Base case: no more items
    if not items:
        return acc
    # 2. Process first item
    first = items[0]
    rest = items[1:]
    # 3. Update accumulator
    updated_acc = update_accumulator(acc, first)
    # 4. Tail recursive call
    return accumulator_template(rest, updated_acc)
Tail Recursion Identification Checklist
A function is tail-recursive if:
- ✓ The recursive call is the last operation
- ✓ No operations are pending after the recursive call returns
- ✓ The return value is directly the result of the recursive call
- ✓ No arithmetic, concatenation, or function calls happen after recursion
Tail recursive (accumulator pattern):
def tail_recursive(n, acc=0):
    if n == 0:
        return acc
    return tail_recursive(n - 1, acc + n)  # ✓ Last operation
Not tail recursive (standard recursion):
def not_tail_recursive(n):
    if n == 0:
        return 0
    return n + not_tail_recursive(n - 1)  # ✗ Addition after recursion
Multiple Base Cases Pattern
Many recursive problems require multiple base cases to handle different termination conditions. This pattern recognizes that recursion may stop for various reasons, each requiring a different return value.
Core Structure
The pattern checks multiple stopping conditions before recursing:
def multiple_base_cases(data):
    # Base case 1: first stopping condition
    if condition_1(data):
        return result_1
    # Base case 2: second stopping condition
    if condition_2(data):
        return result_2
    # Base case 3: third stopping condition (if needed)
    if condition_3(data):
        return result_3
    # Recursive case: problem not yet solved
    return combine(
        process(data),
        multiple_base_cases(reduce(data))
    )
When to Use
- Multiple termination conditions: Different ways recursion can stop
- Edge cases: Empty input, single element, special values
- Boundary conditions: Upper/lower bounds, null values
- Early returns: Optimization to avoid unnecessary recursion
Reference Implementation: Fibonacci
The classic example with two base cases:
def fibonacci(n):
    """Calculate nth Fibonacci number."""
    # Base case 1: F(0) = 0
    if n == 0:
        return 0
    # Base case 2: F(1) = 1
    if n == 1:
        return 1
    # Recursive case: F(n) = F(n-1) + F(n-2)
    return fibonacci(n - 1) + fibonacci(n - 2)

# Test
print(fibonacci(0))  # Output: 0
print(fibonacci(1))  # Output: 1
print(fibonacci(6))  # Output: 8
Why two base cases?
Without both base cases, we get incorrect results or infinite recursion:
Missing F(0) base case:
def broken_fib(n):
    if n == 1:
        return 1
    return broken_fib(n - 1) + broken_fib(n - 2)

# broken_fib(2) → broken_fib(1) + broken_fib(0)
#              → 1 + broken_fib(0)
#              → 1 + (broken_fib(-1) + broken_fib(-2))
#              → infinite recursion!
Recursion tree for fibonacci(5):
                         fib(5)
                      /         \
                 fib(4)          fib(3)
                /      \        /      \
           fib(3)    fib(2)  fib(2)  fib(1)=1
           /    \    /    \   /   \
      fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
      /    \    =1    =1     =0     =1     =0
 fib(1) fib(0)
    =1     =0
Variation 1: Binary Search with Multiple Exits
Binary search has two base cases plus two recursive branches:

def binary_search(arr, target, left=0, right=None):
    """Binary search with multiple base cases."""
    if right is None:
        right = len(arr) - 1
    # Base case 1: search space exhausted (not found)
    if left > right:
        return -1
    mid = (left + right) // 2
    # Base case 2: found target
    if arr[mid] == target:
        return mid
    # Recursive case: target in left half
    if target < arr[mid]:
        return binary_search(arr, target, left, mid - 1)
    # Recursive case: target in right half
    else:
        return binary_search(arr, target, mid + 1, right)

# Test
numbers = [1, 3, 5, 7, 9, 11, 13]
print(binary_search(numbers, 7))  # Output: 3 (found)
print(binary_search(numbers, 8))  # Output: -1 (not found)
Variation 2: List Maximum with Edge Cases
Finding maximum requires handling empty and single-element lists:
def find_max(numbers):
    """Find maximum with multiple base cases."""
    # Base case 1: empty list (error condition)
    if len(numbers) == 0:
        raise ValueError("Cannot find max of empty list")
    # Base case 2: single element is the max
    if len(numbers) == 1:
        return numbers[0]
    # Recursive case: compare first with max of rest
    first = numbers[0]
    max_of_rest = find_max(numbers[1:])
    return first if first > max_of_rest else max_of_rest

# Test
print(find_max([5]))           # Output: 5
print(find_max([3, 1, 4, 1]))  # Output: 4
# find_max([])  # Raises ValueError
Variation 3: Tree Height
Tree height calculation has multiple base cases:
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def tree_height(node):
    """Calculate tree height with multiple base cases."""
    # Base case 1: null node has height -1
    if node is None:
        return -1
    # Base case 2: leaf node has height 0
    if node.left is None and node.right is None:
        return 0
    # Recursive case: 1 + max height of subtrees
    left_height = tree_height(node.left)
    right_height = tree_height(node.right)
    return 1 + max(left_height, right_height)

# Test tree:
#       1
#      / \
#     2   3
#    /
#   4
root = TreeNode(1,
    TreeNode(2, TreeNode(4)),
    TreeNode(3)
)
print(tree_height(root))  # Output: 2
print(tree_height(None))  # Output: -1
Why two base cases for tree height?
- None node: represents the absence of a subtree, height = -1
- Leaf node: has no children, height = 0
- Internal node: height = 1 + max(left_height, right_height)
Without the leaf base case, we'd still get correct results, but with the leaf check we avoid unnecessary recursive calls.
Variation 4: String Palindrome Check
Palindrome checking has multiple stopping conditions:
def is_palindrome(s):
    """Check if string is palindrome with multiple base cases."""
    # Base case 1: empty string is palindrome
    if len(s) == 0:
        return True
    # Base case 2: single character is palindrome
    if len(s) == 1:
        return True
    # Base case 3: first and last don't match
    if s[0] != s[-1]:
        return False
    # Recursive case: check middle substring
    return is_palindrome(s[1:-1])

# Test
print(is_palindrome(""))         # Output: True
print(is_palindrome("a"))        # Output: True
print(is_palindrome("racecar"))  # Output: True
print(is_palindrome("hello"))    # Output: False
Execution trace for is_palindrome("racecar"):
is_palindrome("racecar")
→ "r" == "r"? Yes
→ is_palindrome("aceca")
    → "a" == "a"? Yes
    → is_palindrome("cec")
        → "c" == "c"? Yes
        → is_palindrome("e")
            → len("e") == 1, base case
            → return True
        → return True
    → return True
→ return True
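Each call above slices the string, copying it. A sketch of an index-based variant that avoids the copies (the name `is_palindrome_indexed` is new here, mirroring the earlier index-based optimizations):

```python
def is_palindrome_indexed(s, left=0, right=None):
    """Palindrome check with two pointers instead of slicing."""
    if right is None:
        right = len(s) - 1
    # Base case: pointers met or crossed (empty or single-char middle)
    if left >= right:
        return True
    # Base case: mismatch found
    if s[left] != s[right]:
        return False
    # Recursive case: narrow the window without copying the string
    return is_palindrome_indexed(s, left + 1, right - 1)

print(is_palindrome_indexed("racecar"))  # Output: True
print(is_palindrome_indexed("hello"))    # Output: False
```

Note that `left >= right` collapses the empty-string and single-character base cases into one check.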
Variation 5: GCD with Multiple Base Cases
Greatest Common Divisor using Euclidean algorithm:
def gcd(a, b):
    """Calculate GCD with multiple base cases."""
    # Base case 1: if b is 0, GCD is a
    if b == 0:
        return a
    # Base case 2: if a is 0, GCD is b
    if a == 0:
        return b
    # Base case 3: if equal, GCD is the number
    if a == b:
        return a
    # Recursive case: GCD(a, b) = GCD(b, a mod b)
    return gcd(b, a % b)

# Test
print(gcd(48, 18))  # Output: 6
print(gcd(100, 0))  # Output: 100
print(gcd(0, 50))   # Output: 50
print(gcd(7, 7))    # Output: 7
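Strictly, only the `b == 0` base case is required: `gcd(0, b)` recurses once to `gcd(b, 0)`, and `gcd(a, a)` recurses to `gcd(a, 0)`, so the other two checks are early-exit optimizations. A minimal sketch that keeps just the essential base case and spot-checks it against the standard library:

```python
import math

def gcd_minimal(a, b):
    """Euclidean GCD with the single essential base case."""
    if b == 0:
        return a
    return gcd_minimal(b, a % b)

# Spot-check against math.gcd for non-negative inputs
for a, b in [(48, 18), (100, 0), (0, 50), (7, 7), (270, 192)]:
    assert gcd_minimal(a, b) == math.gcd(a, b)
print("all checks passed")
```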
Complexity Characteristics
Time Complexity: depends on the problem
- Multiple base cases don't change asymptotic complexity
- May provide early-termination optimization
- Example: Fibonacci is still O(2^n) despite two base cases

Space Complexity: depends on recursion depth
- Base cases reduce maximum depth
- Example: palindrome check reduces depth by 2 per call
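The exponential blow-up in Fibonacci comes from recomputing the same subproblems, not from the base cases. A short sketch showing how caching fixes it, using the standard library's `functools.lru_cache` (the name `fib_memo` is new here):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Fibonacci with the same two base cases, plus memoization."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    # Overlapping subproblems are computed once and cached: O(n) total
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(6))   # Output: 8
print(fib_memo(50))  # Output: 12586269025
```

Without the cache, `fibonacci(50)` would take on the order of 2^50 calls; with it, 51 distinct calls suffice.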
Decision Framework
| Use Multiple Base Cases When | Avoid When |
|---|---|
| Different termination conditions exist | Single base case suffices |
| Edge cases need special handling | Complicates logic unnecessarily |
| Early termination possible | All paths need same processing |
| Null/empty/single-element cases | Base cases can be combined |
Common Pitfalls
Pitfall 1: Missing a necessary base case
# Wrong: missing n=0 base case
def broken_fib(n):
    if n == 1:
        return 1
    return broken_fib(n - 1) + broken_fib(n - 2)

# broken_fib(2) leads to broken_fib(0), then broken_fib(-1): infinite recursion!
Pitfall 2: Overlapping base cases
# Redundant: second base case never reached
def redundant_fib(n):
    if n <= 1:
        return n
    if n == 1:  # ✗ Never reached! Already handled above
        return 1
    return redundant_fib(n - 1) + redundant_fib(n - 2)
Pitfall 3: Base cases that don't cover all inputs

# Wrong: n == 1 is the only base case, so small and negative
# inputs never terminate
def broken_factorial(n):
    if n == 1:
        return 1
    return n * broken_factorial(n - 1)

# broken_factorial(5) works, but broken_factorial(0) and
# broken_factorial(-5) step past n == 1 and recurse forever!
# Correct: use a base case that catches every small value, e.g. n <= 1.
Pitfall 4: Inconsistent return types
# Wrong: returns different types from different base cases
def broken_search(arr, target):
    if not arr:
        return None  # ✗ Returns None
    if arr[0] == target:
        return 0  # ✗ Returns int
    result = broken_search(arr[1:], target)
    return result + 1 if result else None  # ✗ Mixing types causes bugs
Base Case Design Checklist
When designing base cases, ask:
1. What are all the ways recursion can terminate?
   - Empty input
   - Single element
   - Target found
   - Search space exhausted
   - Special values (0, None, etc.)

2. What should each termination return?
   - Identity value (0 for sum, 1 for product, [] for list)
   - Sentinel value (-1 for "not found", None for "no result")
   - The input itself (single element is the answer)
   - Error/exception (invalid input)

3. Are base cases mutually exclusive?
   - Check for overlapping conditions
   - Order base cases from most specific to most general

4. Do base cases cover all edge cases?
   - Empty input
   - Single element
   - Negative numbers
   - Null/None values
Quick Reference Template
def multiple_base_cases_template(data):
    """Template for Multiple Base Cases pattern."""
    # Base case 1: most specific condition
    if specific_condition(data):
        return specific_result
    # Base case 2: general stopping condition
    if general_condition(data):
        return general_result
    # Base case 3: error/edge case
    if error_condition(data):
        raise ValueError("Invalid input")
    # Recursive case: problem not yet solved
    return combine(
        process(data),
        multiple_base_cases_template(reduce(data))
    )
Pattern Recognition Guide
Fibonacci-style (two base cases for two recursive calls):
if n == 0: return base_0
if n == 1: return base_1
return recurse(n-1) + recurse(n-2)
Search-style (found vs not found):
if not_found_condition: return -1
if found_condition: return result
return recurse(smaller_problem)
Tree-style (null vs leaf vs internal):
if node is None: return null_result
if is_leaf(node): return leaf_result
return combine(recurse(left), recurse(right))
Validation-style (early failure vs success):
if invalid_condition: return False
if base_success: return True
return recurse(smaller_problem)
Tree Traversal Templates
Tree traversal is the quintessential recursive problem. Trees are inherently recursive structures (a tree is a node with subtrees), making recursion the natural approach. This section provides battle-tested templates for all common tree traversal patterns.
Tree Node Definition
Standard binary tree node structure:
class TreeNode:
    """Standard binary tree node."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def __repr__(self):
        return f"TreeNode({self.value})"
Core Traversal Patterns
Three fundamental depth-first traversals differ only in when the node is processed relative to its children:
- Preorder: Process node before children (Root → Left → Right)
- Inorder: Process node between children (Left → Root → Right)
- Postorder: Process node after children (Left → Right → Root)
Reference Implementation: Preorder Traversal
Process the current node, then recursively traverse left and right subtrees:
def preorder_traversal(node, result=None):
    """Preorder: Root → Left → Right"""
    if result is None:
        result = []
    # Base case: null node
    if node is None:
        return result
    # Process current node FIRST
    result.append(node.value)
    # Then traverse left subtree
    preorder_traversal(node.left, result)
    # Then traverse right subtree
    preorder_traversal(node.right, result)
    return result

# Test tree:
#       1
#      / \
#     2   3
#    / \
#   4   5
root = TreeNode(1,
    TreeNode(2, TreeNode(4), TreeNode(5)),
    TreeNode(3)
)
print(preorder_traversal(root))  # Output: [1, 2, 4, 5, 3]
Execution trace:
preorder(1)
→ process 1 → [1]
→ preorder(2)
    → process 2 → [1, 2]
    → preorder(4)
        → process 4 → [1, 2, 4]
        → preorder(None)  (left child: base case)
        → preorder(None)  (right child: base case)
    → preorder(5)
        → process 5 → [1, 2, 4, 5]
        → preorder(None)  (left child: base case)
        → preorder(None)  (right child: base case)
→ preorder(3)
    → process 3 → [1, 2, 4, 5, 3]
    → preorder(None)  (left child: base case)
    → preorder(None)  (right child: base case)
Use cases for preorder:
- Copying a tree
- Prefix expression evaluation
- Creating a tree from serialized data
- File system traversal (process directory before contents)
Variation 1: Inorder Traversal
Process left subtree, then current node, then right subtree:
def inorder_traversal(node, result=None):
"""Inorder: Left ā Root ā Right"""
if result is None:
result = []
# Base case: null node
if node is None:
return result
# First traverse left subtree
inorder_traversal(node.left, result)
# Then process current node
result.append(node.value)
# Then traverse right subtree
inorder_traversal(node.right, result)
return result
# Using same test tree
print(inorder_traversal(root)) # Output: [4, 2, 5, 1, 3]
Key insight: For a Binary Search Tree, inorder traversal produces values in sorted order.
Use cases for inorder:
- Getting sorted values from a BST
- Infix expression evaluation
- Validating the BST property
- Finding the kth smallest element
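The sorted-order insight is easy to verify directly. A quick check on a small BST (self-contained here; the compact list-returning `inorder` helper is an illustrative variant of the traversal above):

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def inorder(node):
    """Inorder: Left → Root → Right, returned as a new list."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

# BST:      5
#          / \
#         3   7
#        / \
#       2   4
bst = TreeNode(5, TreeNode(3, TreeNode(2), TreeNode(4)), TreeNode(7))
print(inorder(bst))  # Output: [2, 3, 4, 5, 7]
```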
Variation 2: Postorder Traversal
Process both subtrees before processing current node:
def postorder_traversal(node, result=None):
"""Postorder: Left ā Right ā Root"""
if result is None:
result = []
# Base case: null node
if node is None:
return result
# First traverse left subtree
postorder_traversal(node.left, result)
# Then traverse right subtree
postorder_traversal(node.right, result)
# Finally process current node
result.append(node.value)
return result
# Using same test tree
print(postorder_traversal(root)) # Output: [4, 5, 2, 3, 1]
Use cases for postorder:
- Deleting a tree (delete children before parent)
- Calculating directory sizes (need child sizes first)
- Postfix expression evaluation
- Freeing memory in manual memory management
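The directory-size use case maps directly onto postorder: a parent's size cannot be known until every child's size has been computed. A minimal sketch, modeling a file system as nested dicts (an illustrative model, not a real filesystem API):

```python
def dir_size(tree):
    """Postorder size of a nested-dict 'file system'.

    Ints are file sizes; dicts are directories.
    """
    if isinstance(tree, int):
        return tree  # base case: a file knows its own size
    # Children first (postorder), then combine at the parent
    return sum(dir_size(child) for child in tree.values())

fs = {
    "docs": {"a.txt": 120, "b.txt": 80},
    "src": {"main.py": 300, "util": {"helpers.py": 150}},
    "README.md": 50,
}
print(dir_size(fs))  # Output: 700
```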
Variation 3: Level-Order Traversal (BFS)
Process nodes level by level, left to right:
from collections import deque
def level_order_traversal(root):
"""Level-order: Breadth-first traversal"""
if root is None:
return []
result = []
queue = deque([root])
while queue:
node = queue.popleft()
result.append(node.value)
# Add children to queue
if node.left:
queue.append(node.left)
if node.right:
queue.append(node.right)
return result
# Using same test tree
print(level_order_traversal(root)) # Output: [1, 2, 3, 4, 5]
Note: Level-order is typically implemented iteratively with a queue, not recursively. A recursive version is still possible, at the cost of re-walking the tree once per level (O(n · h) time):
def level_order_recursive(root):
"""Recursive level-order traversal."""
def get_height(node):
if node is None:
return 0
return 1 + max(get_height(node.left), get_height(node.right))
def get_level(node, level, result):
if node is None:
return
if level == 1:
result.append(node.value)
else:
get_level(node.left, level - 1, result)
get_level(node.right, level - 1, result)
result = []
height = get_height(root)
for level in range(1, height + 1):
get_level(root, level, result)
return result
print(level_order_recursive(root)) # Output: [1, 2, 3, 4, 5]
Tree Property Calculations
Common recursive tree computations:
Height/Depth
def tree_height(node):
"""Calculate height of tree."""
# Base case: null node
if node is None:
return -1 # Height of empty tree is -1
# Base case: leaf node
if node.left is None and node.right is None:
return 0
# Recursive case: 1 + max height of subtrees
left_height = tree_height(node.left)
right_height = tree_height(node.right)
return 1 + max(left_height, right_height)
print(tree_height(root)) # Output: 2
Node Count
def count_nodes(node):
"""Count total nodes in tree."""
# Base case: null node
if node is None:
return 0
# Recursive case: 1 (current) + left count + right count
return 1 + count_nodes(node.left) + count_nodes(node.right)
print(count_nodes(root)) # Output: 5
Leaf Count
def count_leaves(node):
"""Count leaf nodes in tree."""
# Base case: null node
if node is None:
return 0
# Base case: leaf node
if node.left is None and node.right is None:
return 1
# Recursive case: sum of leaves in subtrees
return count_leaves(node.left) + count_leaves(node.right)
print(count_leaves(root)) # Output: 3 (nodes 4, 5, 3)
Sum of Values
def sum_tree(node):
"""Sum all values in tree."""
# Base case: null node
if node is None:
return 0
# Recursive case: current + left sum + right sum
return node.value + sum_tree(node.left) + sum_tree(node.right)
print(sum_tree(root)) # Output: 15 (1+2+3+4+5)
Tree Search Patterns
Finding a Value
def find_node(node, target):
"""Find node with target value."""
# Base case: null node (not found)
if node is None:
return None
# Base case: found target
if node.value == target:
return node
# Recursive case: search left, then right
left_result = find_node(node.left, target)
if left_result:
return left_result
return find_node(node.right, target)
found = find_node(root, 5)
print(found) # Output: TreeNode(5)
Path to Node
def find_path(node, target, path=None):
"""Find path from root to target node."""
if path is None:
path = []
# Base case: null node
if node is None:
return None
# Add current node to path
path.append(node.value)
# Base case: found target
if node.value == target:
return path
# Recursive case: search left subtree
left_path = find_path(node.left, target, path.copy())
if left_path:
return left_path
# Search right subtree
right_path = find_path(node.right, target, path.copy())
if right_path:
return right_path
# Not found in this subtree
return None
print(find_path(root, 5)) # Output: [1, 2, 5]
Tree Validation
Binary Search Tree Validation
def is_valid_bst(node, min_val=float('-inf'), max_val=float('inf')):
"""Validate if tree is a valid BST."""
# Base case: null node is valid
if node is None:
return True
# Check current node's value is in valid range
if node.value <= min_val or node.value >= max_val:
return False
# Recursively validate subtrees with updated ranges
left_valid = is_valid_bst(node.left, min_val, node.value)
right_valid = is_valid_bst(node.right, node.value, max_val)
return left_valid and right_valid
# Test with BST:
# 5
# / \
# 3 7
# / \
# 2 4
bst = TreeNode(5,
TreeNode(3, TreeNode(2), TreeNode(4)),
TreeNode(7)
)
print(is_valid_bst(bst)) # Output: True
print(is_valid_bst(root)) # Output: False (not a BST)
Tree Modification Patterns
Mirroring a Tree
def mirror_tree(node):
"""Create mirror image of tree."""
# Base case: null node
if node is None:
return None
# Recursive case: swap left and right subtrees
node.left, node.right = node.right, node.left
# Recursively mirror subtrees
mirror_tree(node.left)
mirror_tree(node.right)
return node
# Mirror the tree
mirrored = mirror_tree(root)
print(preorder_traversal(mirrored)) # Output: [1, 3, 2, 5, 4]
Cloning a Tree
def clone_tree(node):
"""Create deep copy of tree."""
# Base case: null node
if node is None:
return None
# Create new node with same value
new_node = TreeNode(node.value)
# Recursively clone subtrees
new_node.left = clone_tree(node.left)
new_node.right = clone_tree(node.right)
return new_node
cloned = clone_tree(root)
print(preorder_traversal(cloned)) # Output: [1, 2, 4, 5, 3]
Complexity Analysis
Time Complexity: O(n) for all traversals
- Each node is visited exactly once
- Constant work per node

Space Complexity: O(h), where h is the tree height
- Call stack depth equals tree height
- Best case (balanced tree): O(log n)
- Worst case (skewed tree): O(n)
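The O(h) space bound can be observed directly by instrumenting a traversal with a depth counter. A sketch comparing a skewed and a balanced tree of the same size (the `max_call_depth` helper is illustrative instrumentation, not one of the standard templates):

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def max_call_depth(node, depth=1):
    """Deepest recursive frame reached while traversing the tree."""
    if node is None:
        return depth - 1  # the null frame does no real work
    return max(depth,
               max_call_depth(node.left, depth + 1),
               max_call_depth(node.right, depth + 1))

# Skewed: 7 nodes in a left chain → stack depth 7
skewed = None
for v in range(7):
    skewed = TreeNode(v, left=skewed)

# Balanced: 7 nodes, height 2 → stack depth 3
balanced = TreeNode(1,
                    TreeNode(2, TreeNode(4), TreeNode(5)),
                    TreeNode(3, TreeNode(6), TreeNode(7)))

print(max_call_depth(skewed))    # Output: 7
print(max_call_depth(balanced))  # Output: 3
```

This is why deeply skewed trees (effectively linked lists) can exhaust Python's default recursion limit even when balanced trees of the same size traverse comfortably.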
Decision Framework
| Traversal | When to Use | Key Property |
|---|---|---|
| Preorder | Copy tree, serialize, prefix expressions | Process parent before children |
| Inorder | BST sorted output, infix expressions | Process left, parent, right |
| Postorder | Delete tree, calculate sizes, postfix | Process children before parent |
| Level-order | Level-by-level processing, shortest path | Process by depth |
Common Pitfalls
Pitfall 1: Forgetting null check
# Wrong: crashes on null node
def broken_traversal(node):
    result = [node.value]  # ← Crashes if node is None
result.extend(broken_traversal(node.left))
result.extend(broken_traversal(node.right))
return result
# Correct: check for null first
def fixed_traversal(node):
    if node is None:  # ← Always check first
return []
# ...
Pitfall 2: Modifying shared result list incorrectly
# Wrong: doesn't accumulate results properly
def broken_inorder(node, result=[]):  # ← Mutable default argument!
if node is None:
return result
broken_inorder(node.left, result)
result.append(node.value)
broken_inorder(node.right, result)
return result
# First call works, but second call has leftover data!
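The failure mode is easy to demonstrate: the default list is created once, at function definition time, so every no-argument call shares and extends the same list (the tiny two-node tree here is just for illustration):

```python
def broken_inorder(node, result=[]):  # mutable default argument!
    if node is None:
        return result
    broken_inorder(node.left, result)
    result.append(node.value)
    broken_inorder(node.right, result)
    return result

class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

tiny = TreeNode(1, TreeNode(2))
print(broken_inorder(tiny))  # Output: [2, 1] -- looks fine
print(broken_inorder(tiny))  # Output: [2, 1, 2, 1] -- leftover data!
```

The `result=None` sentinel pattern used throughout this section exists precisely to avoid this trap.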
Pitfall 3: Wrong traversal order
# Wrong: claims to be inorder but is actually preorder
def broken_inorder(node, result=None):
if result is None:
result = []
if node is None:
return result
    result.append(node.value)  # ← Processing before left subtree!
broken_inorder(node.left, result)
broken_inorder(node.right, result)
return result
Quick Reference Templates
Generic DFS Template:
def dfs_template(node, result=None):
"""Generic depth-first traversal template."""
if result is None:
result = []
# Base case: null node
if node is None:
return result
# PREORDER: process here
# result.append(node.value)
# Traverse left
dfs_template(node.left, result)
# INORDER: process here
# result.append(node.value)
# Traverse right
dfs_template(node.right, result)
# POSTORDER: process here
# result.append(node.value)
return result
Tree Property Template:
def tree_property_template(node):
"""Template for calculating tree properties."""
# Base case: null node
if node is None:
return null_value
# Base case: leaf node (optional optimization)
if node.left is None and node.right is None:
return leaf_value
# Recursive case: combine results from subtrees
left_result = tree_property_template(node.left)
right_result = tree_property_template(node.right)
return combine(node.value, left_result, right_result)
Tree Search Template:
def tree_search_template(node, target):
"""Template for searching in tree."""
# Base case: null node (not found)
if node is None:
return None
# Base case: found target
if matches(node, target):
return node
# Recursive case: search subtrees
left_result = tree_search_template(node.left, target)
if left_result:
return left_result
return tree_search_template(node.right, target)
Backtracking Framework
Backtracking Framework
Backtracking is a systematic way to explore all possible solutions by building candidates incrementally and abandoning them ("backtracking") as soon as it's determined they cannot lead to a valid solution. It's the foundation for solving constraint satisfaction problems, generating permutations, and exploring decision trees.
Core Structure
The backtracking pattern follows a generate-test-recurse-undo cycle:
def backtrack(state, choices, constraints):
# Base case: solution complete
if is_solution(state):
record_solution(state)
return
# Generate candidates
for choice in get_candidates(choices, state):
# Test: is this choice valid?
if is_valid(choice, state, constraints):
# Make choice (modify state)
make_choice(state, choice)
# Recurse with updated state
backtrack(state, choices, constraints)
# Undo choice (backtrack)
undo_choice(state, choice)
When to Use
- Exploring solution spaces: Need to try all possibilities
- Constraint satisfaction: Sudoku, N-Queens, graph coloring
- Combinatorial problems: Permutations, combinations, subsets
- Path finding: Mazes, puzzles with multiple paths
- Optimization: Finding best solution among many candidates
Reference Implementation: Generate All Subsets
Let's establish the canonical backtracking example:
def generate_subsets(nums):
"""Generate all subsets of a list using backtracking."""
result = []
def backtrack(start, current_subset):
# Base case: we've made a decision for all elements
# Record current subset (it's a valid solution)
result.append(current_subset.copy())
# Generate candidates: remaining elements
for i in range(start, len(nums)):
# Make choice: include nums[i]
current_subset.append(nums[i])
# Recurse: make decisions for remaining elements
backtrack(i + 1, current_subset)
# Undo choice: remove nums[i] (backtrack)
current_subset.pop()
backtrack(0, [])
return result
# Test
print(generate_subsets([1, 2, 3]))
# Output: [[], [1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]]
Execution trace for generate_subsets([1, 2]):
backtrack(0, [])
→ record []
→ try including 1
  → backtrack(1, [1])
    → record [1]
    → try including 2
      → backtrack(2, [1, 2])
        → record [1, 2]
        → no more elements
      → undo: remove 2 → [1]
  → undo: remove 1 → []
→ try including 2
  → backtrack(2, [2])
    → record [2]
    → no more elements
  → undo: remove 2 → []
Decision tree:
[]
/ \
[1] [2]
/
[1,2]
Variation 1: Generate All Permutations
Exploring all orderings of elements:
def generate_permutations(nums):
"""Generate all permutations using backtracking."""
result = []
def backtrack(current_perm, remaining):
# Base case: no more elements to add
if not remaining:
result.append(current_perm.copy())
return
# Try each remaining element in current position
for i in range(len(remaining)):
# Make choice: pick remaining[i]
current_perm.append(remaining[i])
# Recurse with remaining elements (excluding chosen one)
new_remaining = remaining[:i] + remaining[i+1:]
backtrack(current_perm, new_remaining)
# Undo choice: remove last element
current_perm.pop()
backtrack([], nums)
return result
# Test
print(generate_permutations([1, 2, 3]))
# Output: [[1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1]]
Decision tree for generate_permutations([1, 2, 3]):
[]
/ | \
[1] [2] [3]
/ \ / \ / \
[1,2] [1,3] [2,1] [2,3] [3,1] [3,2]
| | | | | |
[1,2,3][1,3,2][2,1,3][2,3,1][3,1,2][3,2,1]
Variation 2: N-Queens Problem
Classic constraint satisfaction problem:
def solve_n_queens(n):
"""Solve N-Queens problem using backtracking."""
result = []
board = [['.'] * n for _ in range(n)]
def is_safe(row, col):
"""Check if placing queen at (row, col) is safe."""
# Check column
for i in range(row):
if board[i][col] == 'Q':
return False
# Check diagonal (top-left to bottom-right)
i, j = row - 1, col - 1
while i >= 0 and j >= 0:
if board[i][j] == 'Q':
return False
i -= 1
j -= 1
# Check diagonal (top-right to bottom-left)
i, j = row - 1, col + 1
while i >= 0 and j < n:
if board[i][j] == 'Q':
return False
i -= 1
j += 1
return True
def backtrack(row):
# Base case: placed queens in all rows
if row == n:
            result.append([''.join(r) for r in board])  # r, not row: avoids shadowing the parameter
return
# Try placing queen in each column of current row
for col in range(n):
if is_safe(row, col):
# Make choice: place queen
board[row][col] = 'Q'
# Recurse: place queens in remaining rows
backtrack(row + 1)
# Undo choice: remove queen
board[row][col] = '.'
backtrack(0)
return result
# Test
solutions = solve_n_queens(4)
print(f"Found {len(solutions)} solutions for 4-Queens")
for solution in solutions:
for row in solution:
print(row)
print()
# Output: 2 solutions
# .Q.. ..Q.
# ...Q Q...
# Q... ...Q
# ..Q. .Q..
Variation 3: Sudoku Solver
Filling a 9Ć9 grid with constraints:
def solve_sudoku(board):
"""Solve Sudoku puzzle using backtracking."""
def is_valid(row, col, num):
"""Check if placing num at (row, col) is valid."""
# Check row
if num in board[row]:
return False
# Check column
if num in [board[i][col] for i in range(9)]:
return False
# Check 3x3 box
box_row, box_col = 3 * (row // 3), 3 * (col // 3)
for i in range(box_row, box_row + 3):
for j in range(box_col, box_col + 3):
if board[i][j] == num:
return False
return True
def backtrack():
# Find next empty cell
for row in range(9):
for col in range(9):
if board[row][col] == '.':
# Try each digit 1-9
for num in '123456789':
if is_valid(row, col, num):
# Make choice: place digit
board[row][col] = num
# Recurse: solve rest of board
if backtrack():
return True
# Undo choice: remove digit
board[row][col] = '.'
# No valid digit found, backtrack
return False
# No empty cells left, puzzle solved
return True
backtrack()
return board
# Test with simple puzzle
puzzle = [
['5','3','.','.','7','.','.','.','.'],
['6','.','.','1','9','5','.','.','.'],
['.','9','8','.','.','.','.','6','.'],
['8','.','.','.','6','.','.','.','3'],
['4','.','.','8','.','3','.','.','1'],
['7','.','.','.','2','.','.','.','6'],
['.','6','.','.','.','.','2','8','.'],
['.','.','.','4','1','9','.','.','5'],
['.','.','.','.','8','.','.','7','9']
]
solve_sudoku(puzzle)
# Board is now solved (modifies in place)
Variation 4: Combination Sum
Finding all combinations that sum to target:
def combination_sum(candidates, target):
"""Find all combinations that sum to target."""
result = []
def backtrack(start, current_combo, current_sum):
# Base case: found valid combination
if current_sum == target:
result.append(current_combo.copy())
return
# Base case: exceeded target (pruning)
if current_sum > target:
return
# Try each candidate starting from 'start'
for i in range(start, len(candidates)):
# Make choice: include candidates[i]
current_combo.append(candidates[i])
# Recurse: can reuse same element (start=i, not i+1)
backtrack(i, current_combo, current_sum + candidates[i])
# Undo choice: remove last element
current_combo.pop()
backtrack(0, [], 0)
return result
# Test
print(combination_sum([2, 3, 6, 7], 7))
# Output: [[2, 2, 3], [7]]
Variation 5: Word Search in Grid
Finding if word exists in 2D grid:
def word_search(board, word):
"""Find if word exists in grid using backtracking."""
rows, cols = len(board), len(board[0])
def backtrack(row, col, index):
# Base case: found entire word
if index == len(word):
return True
# Boundary checks
if (row < 0 or row >= rows or
col < 0 or col >= cols or
board[row][col] != word[index]):
return False
# Make choice: mark cell as visited
temp = board[row][col]
board[row][col] = '#'
# Recurse: try all 4 directions
found = (backtrack(row + 1, col, index + 1) or
backtrack(row - 1, col, index + 1) or
backtrack(row, col + 1, index + 1) or
backtrack(row, col - 1, index + 1))
# Undo choice: restore cell
board[row][col] = temp
return found
# Try starting from each cell
for row in range(rows):
for col in range(cols):
if backtrack(row, col, 0):
return True
return False
# Test
grid = [
['A','B','C','E'],
['S','F','C','S'],
['A','D','E','E']
]
print(word_search(grid, "ABCCED")) # Output: True
print(word_search(grid, "SEE")) # Output: True
print(word_search(grid, "ABCB")) # Output: False
Complexity Characteristics
Time Complexity: Typically exponential
- Worst case: O(b^d), where b = branching factor, d = depth
- Permutations: O(n!)
- Subsets: O(2^n)
- N-Queens: O(n!)

Space Complexity: O(d), where d = maximum depth
- Call stack depth
- Plus space for storing the current state

Key Optimization: Pruning
- Abandon branches early when constraints are violated
- Can dramatically reduce actual runtime
- Example: N-Queens checks safety before recursing
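The effect of pruning can be measured by counting recursive calls with and without the safety check. A small instrumented sketch for N-Queens (the `count_calls` harness and its compact `cols`-list board representation are illustrative):

```python
def count_calls(n, prune):
    """Count backtracking calls for N-Queens, with or without pruning."""
    cols = []   # cols[r] = column of the queen placed in row r
    calls = 0

    def safe(row, col):
        # No shared column, no shared diagonal with any earlier queen
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def backtrack(row):
        nonlocal calls
        calls += 1
        if row == n:
            return
        for col in range(n):
            if not prune or safe(row, col):
                cols.append(col)        # make choice
                backtrack(row + 1)      # recurse
                cols.pop()              # undo choice

    backtrack(0)
    return calls

pruned = count_calls(5, prune=True)
unpruned = count_calls(5, prune=False)
print(pruned, unpruned)  # pruning explores far fewer states
```

Without pruning the search walks the entire n-ary tree of placements (1 + n + n² + … + nⁿ calls); with the safety check, whole subtrees below any conflicting placement are never entered.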
Backtracking vs Brute Force
Brute Force: Generate all possibilities, then filter
# Generate all, then filter
all_perms = generate_all_permutations(nums)
valid = [p for p in all_perms if is_valid(p)]
Backtracking: Prune invalid branches during generation
# Prune during generation
def backtrack(state):
if not is_valid(state):
        return  # ← Prune early!
# Continue only with valid states
Decision Framework
| Use Backtracking When | Avoid When |
|---|---|
| Need all solutions | Only need one solution (use greedy/DP) |
| Constraints can prune search space | No constraints to exploit |
| Solution built incrementally | Need global optimization |
| Exploring decision trees | Problem has optimal substructure (use DP) |
Common Pitfalls
Pitfall 1: Forgetting to undo choice
# Wrong: doesn't backtrack
def broken_backtrack(state):
for choice in choices:
        state.append(choice)  # ← Make choice
        broken_backtrack(state)
        # Missing: state.pop()  ← Undo choice!
Pitfall 2: Modifying shared state incorrectly
# Wrong: all solutions share same list
def broken_subsets(nums):
result = []
current = []
def backtrack(start):
        result.append(current)  # ← Appends reference, not copy!
for i in range(start, len(nums)):
current.append(nums[i])
backtrack(i + 1)
current.pop()
backtrack(0)
return result
# All subsets will be empty! Need: result.append(current.copy())
Pitfall 3: Not pruning invalid branches
# Inefficient: explores invalid branches
def slow_n_queens(n):
def backtrack(row):
if row == n:
            if is_valid_board():  # ← Checks entire board at the end
record_solution()
return
for col in range(n):
place_queen(row, col)
backtrack(row + 1)
remove_queen(row, col)
# Better: check validity before recursing
def fast_n_queens(n):
def backtrack(row):
if row == n:
record_solution()
return
for col in range(n):
            if is_safe(row, col):  # ← Prune early!
place_queen(row, col)
backtrack(row + 1)
remove_queen(row, col)
Pitfall 4: Incorrect base case
# Wrong: records incomplete solutions
def broken_permutations(nums):
result = []
def backtrack(current, remaining):
        result.append(current.copy())  # ← Records all partial solutions!
for i in range(len(remaining)):
current.append(remaining[i])
backtrack(current, remaining[:i] + remaining[i+1:])
current.pop()
backtrack([], nums)
return result
# Correct: only record when complete
def fixed_permutations(nums):
result = []
def backtrack(current, remaining):
        if not remaining:  # ← Check if complete
result.append(current.copy())
return
# ...
Quick Reference Template
def backtracking_template(problem_input):
"""Template for backtracking problems."""
result = []
def backtrack(state, choices):
# Base case: solution complete
if is_complete(state):
            result.append(state.copy())  # ← Remember to copy!
return
# Generate candidates
for choice in get_candidates(choices, state):
# Pruning: skip invalid choices
if not is_valid(choice, state):
continue
# Make choice (modify state)
make_choice(state, choice)
# Recurse with updated state
backtrack(state, updated_choices)
# Undo choice (backtrack)
undo_choice(state, choice)
# Initialize and start backtracking
initial_state = create_initial_state()
backtrack(initial_state, problem_input)
return result
Backtracking Checklist
When implementing backtracking, ensure:
- Base case: When is solution complete?
- Candidate generation: What choices are available?
- Validity check: Which choices are valid? (Pruning)
- Make choice: How to update state?
- Recurse: Call backtrack with updated state
- Undo choice: How to restore state? (Critical!)
- Copy results: Use .copy() when recording solutions
Pattern Recognition Guide
Subset/Combination problems:
- Include/exclude each element
- A start index prevents duplicates
- Examples: subsets, combination sum

Permutation problems:
- Try each element in each position
- Track used elements
- Examples: permutations, N-Queens

Grid/Path problems:
- Try all directions from the current cell
- Mark visited cells
- Examples: word search, maze solving

Constraint satisfaction:
- Try each value for the current variable
- Check constraints before recursing
- Examples: Sudoku, graph coloring
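Graph coloring, listed under constraint satisfaction but not implemented above, follows the same shape as Sudoku: try each color for the current vertex, check neighbors before recursing, undo on failure. A minimal sketch (the adjacency-list representation and function name are illustrative):

```python
def color_graph(adj, num_colors):
    """Assign colors 0..num_colors-1 so no adjacent vertices match.

    adj: adjacency list, adj[v] = list of neighbors of vertex v.
    Returns a list of colors, or None if no valid coloring exists.
    """
    n = len(adj)
    colors = [None] * n

    def valid(vertex, color):
        # Constraint check: no already-colored neighbor uses this color
        return all(colors[nb] != color for nb in adj[vertex])

    def backtrack(vertex):
        if vertex == n:                  # base case: all vertices colored
            return True
        for color in range(num_colors):
            if valid(vertex, color):     # prune before recursing
                colors[vertex] = color   # make choice
                if backtrack(vertex + 1):
                    return True
                colors[vertex] = None    # undo choice
        return False                     # no color works: backtrack

    return colors if backtrack(0) else None

# A triangle needs 3 colors; 2 are not enough
triangle = [[1, 2], [0, 2], [0, 1]]
print(color_graph(triangle, 3))  # Output: [0, 1, 2]
print(color_graph(triangle, 2))  # Output: None
```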